Talk by Johannes Burge from the Center for Perceptual Systems, University of Texas, Austin. Given to the Redwood Center for Theoretical Neuroscience at UC Berkeley on Sept. 2, 2010.
In a vast number of animals, vision begins with a lens system that focuses light near the plane of the retinal photoreceptors. A lens focuses light perfectly from only one distance, and natural scenes contain objects at many distances. Thus, defocus information is nearly always present in images of natural scenes. Defocus information plays a role in many tasks: depth and scale estimation, accommodation control, and eye-growth regulation. Yet little is known about the computations visual systems use to detect and estimate defocus in images of natural scenes. The computer vision literature describes algorithms for defocus estimation, but they typically require multiple simultaneous images, coded apertures, or structured illuminants projected onto the environment. Mammalian visual systems lack these aids.

We describe a principled approach for estimating defocus in small regions of single images, given a training set of natural images, a wave-optics model of the lens system, and a photo-sensor array. We demonstrate the approach's efficacy in a model human visual system, showing that defocus magnitude can be estimated with high precision and essentially no bias. We also show that astigmatism and chromatic aberration resolve the sign ambiguity often attributed to defocus blur; thus, two common image-degrading optical effects have a potential adaptive function.

The computational approach is general and applies to any environment-vision system pairing: natural or artificial, biological or machine. Our work establishes a principled framework for analyzing the psychophysics and neurophysiology of defocus estimation, and for developing optimal image-based auto-focusing and range-estimation algorithms for off-the-shelf imaging devices.
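To make the core idea concrete, here is a toy sketch of single-image defocus estimation. This is not the authors' actual algorithm (which uses a wave-optics model of the human lens system and filters learned from natural images); instead it makes simplifying assumptions: defocus is modeled as a Gaussian blur of known candidate magnitudes, "natural" image patches are synthesized with 1/f amplitude spectra, and estimation is nearest-template matching on radially averaged power spectra built from a training set. All function names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32  # patch size in pixels


def make_patch(n, rng):
    """Synthesize a patch with a 1/f amplitude spectrum (natural-image-like)."""
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    r = np.sqrt(fx**2 + fy**2)
    amp = np.where(r > 0, 1.0 / np.maximum(r, 1e-6), 0.0)  # zero out DC
    spec = amp * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return np.fft.ifft2(spec).real


def blur(patch, sigma):
    """Apply defocus, modeled here as Gaussian blur via its frequency-domain OTF."""
    n = patch.shape[0]
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    otf = np.exp(-2 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(patch) * otf).real


def radial_power(patch, nbins=8):
    """Radially averaged power spectrum: the defocus-diagnostic statistic here."""
    n = patch.shape[0]
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    r = np.sqrt(fx**2 + fy**2)
    p = np.abs(np.fft.fft2(patch)) ** 2
    edges = np.linspace(0.0, 0.5 * np.sqrt(2), nbins + 1)
    idx = np.digitize(r.ravel(), edges) - 1
    return np.array([p.ravel()[idx == k].mean() for k in range(nbins)])


# "Training": average spectra over many patches for each candidate defocus level.
sigmas = [0.5, 1.0, 2.0, 3.0]
templates = {
    s: np.mean([radial_power(blur(make_patch(N, rng), s)) for _ in range(50)], axis=0)
    for s in sigmas
}


def estimate_defocus(patch, templates):
    """Pick the defocus level whose template log-spectrum is nearest the patch's."""
    lp = np.log(radial_power(patch) + 1e-20)
    return min(
        templates,
        key=lambda s: np.sum((lp - np.log(templates[s] + 1e-20)) ** 2),
    )


test_patch = blur(make_patch(N, rng), 2.0)
print(estimate_defocus(test_patch, templates))
```

Because Gaussian defocus attenuates high spatial frequencies monotonically with blur magnitude, the radial power spectrum separates the candidate levels cleanly; note, though, that this even-symmetric blur model cannot recover the sign of defocus, which is why the talk's point about astigmatism and chromatic aberration breaking that ambiguity matters.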